[processor/metricstransform] Fix aggregation of exponential histograms #39143
Conversation
Overall makes sense to me, but I think there should be a test for each possible case of mismatched bucket count.
// Note that groupExponentialHistogramDataPoints() has already ensured that we only try
// to merge exponential histograms with matching Scale and Positive/Negative Offsets,
// so the corresponding array items in BucketCounts have the same bucket boundaries.
// However, the number of buckets may differ depending on what values have been observed.
for b := 0; b < dps.At(i).Negative().BucketCounts().Len(); b++ {
I see how this should handle cases where negatives.Len() > dps.At(i).Negative().BucketCounts().Len(), but I think it should be covered in tests to ensure that future maintenance doesn't break the behavior accidentally.
It looks like the codeowner allowlist has been fixed on main, so a merge/rebase should resolve that failing check.
A panic would occur if the histograms being merged had different numbers of populated entries in the BucketCounts array. Also the counts for the Zero buckets were not being merged.
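As an illustration of both issues, here is a minimal sketch of the intended merge behavior. It uses plain Go slices and a stand-in expHist type instead of the real pdata/pmetric API, and the helper names (mergeBucketCounts, merge) are hypothetical, not the component's actual functions:

```go
package main

import "fmt"

// expHist is a stripped-down stand-in for an exponential histogram data point,
// used only to illustrate the merge; the real code operates on pmetric types.
type expHist struct {
	zeroCount uint64
	positive  []uint64 // bucket counts, same scale and offset as the peer
	negative  []uint64
}

// mergeBucketCounts adds src's counts into dst bucket by bucket, growing dst
// when src has more populated buckets than dst (the case that used to panic).
func mergeBucketCounts(dst, src []uint64) []uint64 {
	for i, c := range src {
		if i < len(dst) {
			dst[i] += c // bucket exists in both data points
		} else {
			dst = append(dst, c) // src observed values dst never saw
		}
	}
	return dst
}

// merge folds src into dst, including the zero bucket that was previously dropped.
func merge(dst, src *expHist) {
	dst.zeroCount += src.zeroCount
	dst.positive = mergeBucketCounts(dst.positive, src.positive)
	dst.negative = mergeBucketCounts(dst.negative, src.negative)
}

func main() {
	a := &expHist{zeroCount: 1, positive: []uint64{1, 2, 3}, negative: []uint64{4}}
	b := &expHist{zeroCount: 2, positive: []uint64{5, 6}, negative: []uint64{7, 8}}
	merge(a, b)
	fmt.Println(a.zeroCount, a.positive, a.negative) // 3 [6 8 3] [11 8]
}
```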
Added test cases for negative buckets, and for a second data point with fewer (instead of more) bucket count array items than the first.
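A table-driven test in this spirit (again sketched against the illustrative mergeBucketCounts helper above, not the component's real test harness) would pin down both directions of the length mismatch:

```go
package main

import (
	"reflect"
	"testing"
)

// TestMergeBucketCounts enumerates the mismatched-length cases raised in review:
// second data point longer, second data point shorter, and equal lengths.
func TestMergeBucketCounts(t *testing.T) {
	cases := []struct {
		name     string
		dst, src []uint64
		want     []uint64
	}{
		{"second longer", []uint64{1, 2}, []uint64{3, 4, 5}, []uint64{4, 6, 5}},
		{"second shorter", []uint64{1, 2, 3}, []uint64{4}, []uint64{5, 2, 3}},
		{"equal length", []uint64{1, 2}, []uint64{3, 4}, []uint64{4, 6}},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			if got := mergeBucketCounts(tc.dst, tc.src); !reflect.DeepEqual(got, tc.want) {
				t.Errorf("got %v, want %v", got, tc.want)
			}
		})
	}
}
```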
d492dea to ab4293e
@dehaansa Thanks! I've added some more cases to the test. Is it normal for the CI workflows to take this long, or is there something stuck?
For folks that aren't yet members of the otel community, CI has to be approved before it runs, sorry! I've approved this run; feel free to ping me again if you need additional runs approved.
@enisoc we'll give the component codeowners some time to respond, and if they don't get back within a couple days we'll go ahead and mark this ready to merge.
[processor/metricstransform] Fix aggregation of exponential histograms (open-telemetry#39143)

#### Description

A panic would occur if the histograms being merged had different numbers of populated entries in the BucketCounts array. Also the counts for the Zero buckets were not being merged.

Example config:

```
processors:
  metricstransform:
    transforms:
      - action: combine
        aggregation_type: sum
        include: traces.span.metrics.duration
        new_name: traces.span.metrics.duration
```

Example panic:

```
panic: runtime error: index out of range [133] with length 133

goroutine 177 [running]:
go.opentelemetry.io/collector/pdata/pcommon.UInt64Slice.At(...)
    go.opentelemetry.io/collector/[email protected]/pcommon/generated_uint64slice.go:55
github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal/aggregateutil.mergeExponentialHistogramDataPoints(0x40014b57e0?, {0x4007bc7220?, 0x400744fee8?})
    github.com/open-telemetry/opentelemetry-collector-contrib/internal/[email protected]/aggregateutil/aggregate.go:309 +0x888
github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal/aggregateutil.MergeDataPoints({0x4001a435c0?, 0x400744fee8?}, {0x40011b4078?, 0x60?}, {0x0?, 0x0?, 0x0?, 0x4006a73410?})
    github.com/open-telemetry/opentelemetry-collector-contrib/internal/[email protected]/aggregateutil/aggregate.go:96 +0x208
github.com/open-telemetry/opentelemetry-collector-contrib/processor/metricstransformprocessor.groupMetrics({0x4001b048a0?, 0x400744fed8?}, {0x40011b4078, 0x3}, {0x4001a435c0?, 0x400744fee8?})
    github.com/open-telemetry/opentelemetry-collector-contrib/processor/[email protected]/metrics_transform_processor_otlp.go:451 +0xb8
github.com/open-telemetry/opentelemetry-collector-contrib/processor/metricstransformprocessor.combine({{0xadce910, 0x40018a0fd8}, {0x40011b4066, 0x7}, {0x4000777800, 0x1c}, 0x0, {0x40011b4078, 0x3}, {0x0, ...}, ...}, ...)
    github.com/open-telemetry/opentelemetry-collector-contrib/processor/[email protected]/metrics_transform_processor_otlp.go:438 +0x2c0
github.com/open-telemetry/opentelemetry-collector-contrib/processor/metricstransformprocessor.(*metricsTransformProcessor).processMetrics.func1.1({0x4000c0fd50?, 0x4007038b80?})
    github.com/open-telemetry/opentelemetry-collector-contrib/processor/[email protected]/metrics_transform_processor_otlp.go:258 +0x4e8
go.opentelemetry.io/collector/pdata/pmetric.ScopeMetricsSlice.RemoveIf({0x40041de338?, 0x4007038b80?}, 0x4000e3f5f8)
    go.opentelemetry.io/collector/[email protected]/pmetric/generated_scopemetricsslice.go:111 +0x80
github.com/open-telemetry/opentelemetry-collector-contrib/processor/metricstransformprocessor.(*metricsTransformProcessor).processMetrics.func1({0x40041de300?, 0x4007038b80?})
    github.com/open-telemetry/opentelemetry-collector-contrib/processor/[email protected]/metrics_transform_processor_otlp.go:235 +0x64
go.opentelemetry.io/collector/pdata/pmetric.ResourceMetricsSlice.RemoveIf({0x4000e3c378?, 0x4007038b80?}, 0x4000e3f6e8)
    go.opentelemetry.io/collector/[email protected]/pmetric/generated_resourcemetricsslice.go:111 +0x80
github.com/open-telemetry/opentelemetry-collector-contrib/processor/metricstransformprocessor.(*metricsTransformProcessor).processMetrics(0x4000e3f748?, {0x50ac40c?, 0x10?}, {0x4000e3c378?, 0x4007038b80?})
    github.com/open-telemetry/opentelemetry-collector-contrib/processor/[email protected]/metrics_transform_processor_otlp.go:234 +0x74
go.opentelemetry.io/collector/processor/processorhelper.NewMetrics.func1({0xad9c090, 0x11b475c0}, {0x4000e3c378?, 0x4007038b80?})
    go.opentelemetry.io/collector/[email protected]/processorhelper/metrics.go:55 +0xfc
go.opentelemetry.io/collector/consumer.ConsumeMetricsFunc.ConsumeMetrics(...)
    go.opentelemetry.io/collector/[email protected]/metrics.go:27
go.opentelemetry.io/collector/processor/processorhelper.NewMetrics.func1({0xad9c090, 0x11b475c0}, {0x4000e3c378?, 0x4007038b80?})
    go.opentelemetry.io/collector/[email protected]/processorhelper/metrics.go:66 +0x21c
go.opentelemetry.io/collector/consumer.ConsumeMetricsFunc.ConsumeMetrics(...)
    go.opentelemetry.io/collector/[email protected]/metrics.go:27
go.opentelemetry.io/collector/processor/processorhelper.NewMetrics.func1({0xad9c090, 0x11b475c0}, {0x4000e3c2e8?, 0x4007038390?})
    go.opentelemetry.io/collector/[email protected]/processorhelper/metrics.go:66 +0x21c
go.opentelemetry.io/collector/consumer.ConsumeMetricsFunc.ConsumeMetrics(...)
    go.opentelemetry.io/collector/[email protected]/metrics.go:27
go.opentelemetry.io/collector/processor/processorhelper.NewMetrics.func1({0xad9c090, 0x11b475c0}, {0x4000e3c2e8?, 0x4007038390?})
    go.opentelemetry.io/collector/[email protected]/processorhelper/metrics.go:66 +0x21c
go.opentelemetry.io/collector/consumer.ConsumeMetricsFunc.ConsumeMetrics(...)
    go.opentelemetry.io/collector/[email protected]/metrics.go:27
go.opentelemetry.io/collector/processor/processorhelper.NewMetrics.func1({0xad9c090, 0x11b475c0}, {0x4000e3c2e8?, 0x4007038390?})
    go.opentelemetry.io/collector/[email protected]/processorhelper/metrics.go:66 +0x21c
go.opentelemetry.io/collector/consumer.ConsumeMetricsFunc.ConsumeMetrics(...)
    go.opentelemetry.io/collector/[email protected]/metrics.go:27
go.opentelemetry.io/collector/consumer.ConsumeMetricsFunc.ConsumeMetrics(...)
    go.opentelemetry.io/collector/[email protected]/metrics.go:27
go.opentelemetry.io/collector/internal/fanoutconsumer.(*metricsConsumer).ConsumeMetrics(0x400197bd40, {0xad9c090, 0x11b475c0}, {0x4000e3c2e8?, 0x4007038390?})
    go.opentelemetry.io/collector/internal/[email protected]/metrics.go:60 +0x1ec
github.com/open-telemetry/opentelemetry-collector-contrib/connector/spanmetricsconnector.(*connectorImp).exportMetrics(0x4000203a40, {0xad9c090, 0x11b475c0})
    github.com/open-telemetry/opentelemetry-collector-contrib/connector/[email protected]/connector.go:258 +0x110
github.com/open-telemetry/opentelemetry-collector-contrib/connector/spanmetricsconnector.(*connectorImp).Start.func1()
    github.com/open-telemetry/opentelemetry-collector-contrib/connector/[email protected]/connector.go:213 +0x48
created by github.com/open-telemetry/opentelemetry-collector-contrib/connector/spanmetricsconnector.(*connectorImp).Start in goroutine 1
    github.com/open-telemetry/opentelemetry-collector-contrib/connector/[email protected]/connector.go:207 +0xa8
```

---------

Co-authored-by: Antoine Toulme <[email protected]>